    Optimizing monitorability of multi-cloud applications

    When adopting a multi-cloud strategy, selecting the cloud providers on which to deploy VMs is crucial for ensuring good behaviour of the developed application. This selection usually focuses on general information about the performance and capabilities offered by the cloud providers. Less attention has been paid to monitoring services, although it is fundamental for the application developer to understand how the application behaves while it is running. In this paper we propose an approach based on a multi-objective mixed integer linear optimization problem for supporting the selection of cloud providers able to satisfy constraints on the monitoring dimensions associated with VMs. The balance between the quality of the monitored data and the cost of obtaining these data is considered, as well as the possibility for the cloud provider to enrich the set of monitored metrics through data analysis.
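    As a rough illustration of the kind of multi-objective mixed integer linear formulation the abstract describes, the sketch below (using the PuLP library) assigns each VM to one provider while trading off monitored-data quality against monitoring cost via a weighted sum. The provider names, quality scores, costs, weight, and minimum-quality constraint are all invented for illustration and are not the authors' model.

```python
# Hypothetical sketch of a monitoring-aware provider-selection MILP.
# All provider data (quality scores, costs) are invented for illustration.
import pulp

vms = ["vm1", "vm2"]
providers = ["provA", "provB", "provC"]
quality = {"provA": 0.9, "provB": 0.7, "provC": 0.5}   # monitored-data quality (assumed)
cost = {"provA": 8.0, "provB": 5.0, "provC": 2.0}      # monitoring cost per VM (assumed)
alpha = 0.6  # assumed weight balancing quality against cost

prob = pulp.LpProblem("monitoring_aware_provider_selection", pulp.LpMaximize)
x = pulp.LpVariable.dicts("assign", (vms, providers), cat="Binary")

# Each VM is deployed on exactly one provider.
for v in vms:
    prob += pulp.lpSum(x[v][p] for p in providers) == 1

# Example constraint: every VM must reach a minimum monitoring quality.
for v in vms:
    prob += pulp.lpSum(quality[p] * x[v][p] for p in providers) >= 0.6

# Weighted-sum objective: reward quality, penalise (normalised) cost.
prob += pulp.lpSum(
    (alpha * quality[p] - (1 - alpha) * cost[p] / max(cost.values())) * x[v][p]
    for v in vms for p in providers
)

prob.solve()
for v in vms:
    chosen = [p for p in providers if x[v][p].value() == 1]
    print(v, "->", chosen[0])
```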

    ThermoSim: Deep Learning based Framework for Modeling and Simulation of Thermal-aware Resource Management for Cloud Computing Environments

    Current cloud computing frameworks host millions of physical servers that provide cloud computing resources in the form of virtual machines. Cloud Data Center (CDC) infrastructures require significant amounts of energy to deliver large-scale computational services. Moreover, computing nodes generate large volumes of heat, which in turn requires cooling units to dissipate. Thus, the overall energy consumption of the CDC increases tremendously, for servers as well as for cooling units. However, current workload allocation policies do not take their effect on temperature into account, and simulating the thermal behavior of CDCs is challenging. There is a need for a thermal-aware framework to simulate and model the behavior of nodes and measure the important performance parameters that can be affected by their temperature. In this paper, we propose a lightweight framework, ThermoSim, for modeling and simulation of thermal-aware resource management in cloud computing environments. This work presents a Recurrent Neural Network based deep learning temperature predictor for CDCs, which ThermoSim uses for lightweight resource management in constrained cloud environments. ThermoSim extends the CloudSim toolkit, helping to analyze key performance parameters such as energy consumption, service level agreement violation rate, number of virtual machine migrations, and temperature during the management of cloud resources for the execution of workloads. Further, different energy-aware and thermal-aware resource management techniques are tested using the proposed ThermoSim framework in order to validate it against an existing framework (Thas). The experimental results demonstrate that the proposed framework is capable of modeling and simulating the thermal behavior of a CDC and that ThermoSim outperforms Thas in terms of energy consumption, cost, time, memory usage, and prediction accuracy.
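    To make the idea of a recurrent temperature predictor concrete, here is a minimal PyTorch sketch of an LSTM regressor that maps a short window of resource readings to a next-step host temperature. The input features (CPU load, memory usage, previous temperature), window length, network size, and synthetic training data are assumptions for illustration, not ThermoSim's actual architecture or dataset.

```python
# Minimal sketch of a recurrent temperature predictor, loosely inspired by the
# RNN-based approach described above; features and data are invented.
import torch
import torch.nn as nn

class TempPredictor(nn.Module):
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # predict next host temperature

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # regress from the last time step

# Toy training loop on random data standing in for (cpu_load, mem_usage, temp) windows.
model = TempPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.rand(64, 10, 3)        # 64 windows of 10 time steps, 3 resource features
y = 40 + 30 * x[:, -1, :1]       # synthetic "temperature" target, roughly 40-70 C

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```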

    A manifesto for future generation cloud computing: research directions for the next decade

    The Cloud computing paradigm has revolutionised the computer science horizon during the past decade and has enabled the emergence of computing as the fifth utility. It has captured significant attention from academia, industry, and government bodies. It has now emerged as the backbone of the modern economy by offering subscription-based services anytime, anywhere, following a pay-as-you-go model. This has instigated (1) shorter establishment times for start-ups, (2) the creation of scalable global enterprise applications, (3) better cost-to-value associativity for scientific and high-performance computing applications, and (4) different invocation/execution models for pervasive and ubiquitous applications. Recent technological developments and paradigms such as serverless computing, software-defined networking, Internet of Things, and processing at the network edge are creating new opportunities for Cloud computing. However, they are also posing several new challenges and creating the need for new approaches and research strategies, as well as the re-evaluation of the models that were developed to address issues such as scalability, elasticity, reliability, security, sustainability, and application models. The proposed manifesto addresses these by identifying the major open challenges in Cloud computing, emerging trends, and impact areas. It then offers research directions for the next decade, thus helping in the realisation of Future Generation Cloud Computing.

    Critical analysis of vendor lock-in and its impact on cloud computing migration: a business perspective

    Vendor lock-in is a major barrier to the adoption of cloud computing, due to the lack of standardization. Current solutions and efforts tackling the vendor lock-in problem are predominantly technology-oriented. Limited studies exist to analyse and highlight the complexity of the vendor lock-in problem in the cloud environment. Consequently, most customers are unaware of the proprietary standards which inhibit interoperability and portability of applications when taking services from vendors. This paper provides a critical analysis of the vendor lock-in problem from a business perspective. A survey based on qualitative and quantitative approaches conducted in this study identified the main risk factors that give rise to lock-in situations. The analysis of our survey of 114 participants shows that, as computing resources migrate from on-premise to the cloud, the vendor lock-in problem is exacerbated. Furthermore, the findings exemplify the importance of interoperability, portability, and standards in cloud computing. A number of strategies are proposed on how to avoid and mitigate lock-in risks when migrating to cloud computing. The strategies relate to contracts, the selection of vendors that support standardised formats and protocols for data structures and APIs, and developing awareness of commonalities and dependencies among cloud-based solutions. We strongly believe that the implementation of these strategies has great potential to reduce the risks of vendor lock-in.

    Performance evaluation of live virtual machine migration in SDN-enabled cloud data centers

    In Software-Defined Networking (SDN) enabled cloud data centers, live VM migration is a key technology for facilitating resource management and fault tolerance. Although much research focuses on network-aware live migration of VMs in cloud computing, some parameters that affect live migration performance have been largely neglected. Furthermore, while SDN provides more flexibility in traffic routing, latencies within the SDN directly affect live migration performance. In this paper, we pinpoint the parameters, from both system and network aspects, that affect the performance of live migration in an OpenStack environment, such as the static adjustment algorithm of live migration, the performance difference between parallel and sequential migration, and the impact of the SDN dynamic flow scheduling update rate on the TCP/IP protocol. From the QoS perspective, we evaluate the pattern of client and server response times during pre-copy, hybrid post-copy, and auto-convergence based migration.
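    The parallel-versus-sequential comparison mentioned above can be sketched with a simple timing harness. Here live_migrate() is a hypothetical placeholder for whatever call triggers and waits on one VM's live migration (e.g., through the OpenStack API); the VM list and the simulated migration duration are invented. In a real data center the parallel case would also contend for migration bandwidth, which is exactly the kind of effect the paper measures.

```python
# Hypothetical harness for comparing sequential vs parallel live migration times.
# live_migrate() is a placeholder, not a real OpenStack call; it sleeps to
# simulate the duration of a single migration.
import time
from concurrent.futures import ThreadPoolExecutor

def live_migrate(vm_id: str) -> None:
    """Placeholder for triggering and waiting on one VM's live migration."""
    time.sleep(1.0)  # simulated migration duration

vms = ["vm-1", "vm-2", "vm-3", "vm-4"]

start = time.perf_counter()
for vm in vms:                       # sequential migration
    live_migrate(vm)
seq = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(vms)) as pool:   # parallel migration
    list(pool.map(live_migrate, vms))
par = time.perf_counter() - start

print(f"sequential: {seq:.1f}s, parallel: {par:.1f}s")
```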

    SHYAM: A system for autonomic management of virtual clusters in hybrid clouds

    While the public cloud model has been vastly explored over the last few years to meet the demand for large-scale distributed computing capabilities, many organizations are now focusing on the hybrid cloud model, where the classic scenario is enriched with a private (company-owned) cloud, e.g., for the management of sensitive data. In this work, we propose SHYAM, a software layer for the autonomic deployment and configuration of virtual clusters on a hybrid cloud. The system can be used to cope with a temporary (or permanent) lack of computational resources on the private cloud, enabling cloud bursting in the context of big data applications. We first provide an empirical evaluation of the overhead introduced by SHYAM's provisioning mechanism. We then show that, although the execution time is significantly influenced by the inter-cloud bandwidth, an autonomic off-premise provisioning mechanism can significantly improve application performance.
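    As a toy illustration of the kind of autonomic cloud-bursting decision such a layer automates, the loop below provisions off-premise workers when private-cloud utilization crosses a threshold. The threshold, burst size, and the get_private_utilization() and provision_public_workers() helpers are hypothetical placeholders, not SHYAM's interfaces.

```python
# Toy cloud-bursting loop inspired by the abstract; the monitoring and
# provisioning helpers below are hypothetical placeholders, not SHYAM APIs.
import random
import time

BURST_THRESHOLD = 0.85   # assumed utilization level that triggers off-premise provisioning
BURST_SIZE = 2           # assumed number of public-cloud VMs added per burst

def get_private_utilization() -> float:
    """Placeholder: return current private-cloud CPU utilization in [0, 1]."""
    return random.uniform(0.5, 1.0)

def provision_public_workers(n: int) -> None:
    """Placeholder: start n worker VMs off-premise and join them to the virtual cluster."""
    print(f"bursting: provisioning {n} off-premise workers")

for _ in range(5):                   # one monitoring cycle per iteration
    util = get_private_utilization()
    if util > BURST_THRESHOLD:
        provision_public_workers(BURST_SIZE)
    time.sleep(1)
```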

    Implementing virtual machine: a performance evaluation

    A hypervisor is a hardware virtualization technique that allows multiple guest operating systems to run on a single host machine at the same time. Each Virtual Machine (VM), also known as a guest operating system, emulates all the interfaces and resources of a real computer system. Virtualization is beneficial as an educational tool for facilitating students' hands-on experiences and research activities. However, the performance of VMs needs to be taken into consideration. We investigate the performance of a set of VMs using Oracle VirtualBox on several host machines, each with its own system specifications. We observe the resource utilization of each host machine in terms of CPU utilization, CPU speed, and memory usage. Experimental results show that the average CPU utilization is 51.78%, 60.7%, and 62.57% for the cases before memory allocation, at 1/2 of memory capacity, and at 2/3 of memory capacity, respectively. This indicates that the utilization of the host processor is directly proportional to the memory capacity assigned to a virtual machine.
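    The host-side measurement reported above (CPU utilization and memory usage while a guest runs) can be approximated with the psutil library, as in the sketch below. The sampling interval and duration are arbitrary choices, and this observes the whole host rather than the VirtualBox process specifically, so it is only a rough stand-in for the authors' measurement setup.

```python
# Simple host-resource sampler, roughly matching the metrics reported above
# (CPU utilization and memory usage); run on the host while the VM executes.
import psutil

samples = []
for _ in range(30):                          # ~30 seconds of measurement
    cpu = psutil.cpu_percent(interval=1.0)   # % CPU over the 1 s interval
    mem = psutil.virtual_memory().percent    # % of host RAM in use
    samples.append((cpu, mem))

avg_cpu = sum(c for c, _ in samples) / len(samples)
avg_mem = sum(m for _, m in samples) / len(samples)
print(f"average CPU utilization: {avg_cpu:.2f}%, average memory usage: {avg_mem:.2f}%")
```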